
SoylentNews is people

SoylentNews is powered by your submissions, so send in your scoop. Only 11 submissions in the queue.




What would you use if you couldn't use your current distribution/operating system?

  • Linux
  • Windows
  • BSD
  • ChromeOS / Android
  • macOS / iOS
  • Open[DOS, Solaris, STEP, VMS]
  • I don't use a computer you insensitive clod!
  • Other (describe in comments)

[ Results | Polls ]
Comments: 116 | Votes: 131

posted by hubie on Monday March 02, @08:12PM   Printer-friendly
from the dual-purpose dept.

MotorTrend reports (https://www.motortrend.com/news/kia-plant-solar-power-hail-protection) that the Kia assembly plant in Georgia suffered very expensive hail damage to new cars awaiting shipment during a 2023 storm. The fix is a massive raised solar array of 3.2 million square feet (roughly 300,000 m²) over the car park/storage area.
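A quick sanity check of the reported area figures (the square footage is from the article; the conversion factor is the standard foot-to-metre definition):

```python
# 1 ft = 0.3048 m exactly, so 1 sq ft = 0.3048**2 sq m
SQFT_TO_SQM = 0.09290304

area_sqft = 3_200_000
area_sqm = area_sqft * SQFT_TO_SQM
print(f"{area_sqm:,.0f} m^2")  # ≈ 297,290 m^2, consistent with the quoted ~300,000 m²
```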

The system has about 17,000 solar panels on the columns of a structure that is large enough to protect about 15,000 vehicles from the elements until they are loaded onto trucks or rail cars for delivery. Hail damage costs billions of dollars a year.

The panels are not all connected yet. Construction began in 2024 and the goal was to be done in the first quarter of 2026 but panels are still being installed. It should be finished this spring.

VPS [Vehicle Protection Structures] has provided this kind of protection to dealerships, but this is the first large-scale execution for an assembly plant.

The partnership is also working with Georgia Power to optimize energy production and integrate the power generated by the solar panels into the plant. The panels will be capable of supplying 10 percent of the plant's energy needs. The project also provided credits under the U.S. Inflation Reduction Act until that act was terminated.

Pics at the link, sort of like large "pop-up" shelters. To your AC submitter it's quite attractive.

Insuring the solar panels for hail damage seems like it would be cheaper than insurance to cover the same area of cars.


Original Submission

posted by hubie on Monday March 02, @03:23PM   Printer-friendly

https://arstechnica.com/tech-policy/2026/02/whoops-us-military-laser-strike-takes-down-cbp-drone-near-mexican-border/

The US military mistakenly shot down a Customs and Border Protection (CBP) drone near the Mexican border in a strike that reportedly used a laser-based anti-drone system. The CBP uses drones to track people crossing the border.

"Congressional aides told Reuters the Pentagon used the high-energy laser system to shoot down a Customs and Border Protection drone near the Mexican border, in an area that often has incursions from Mexican drones used by drug cartels," Reuters reported last night.

[...] "The Defense Department didn't realize the drone was being flown by CBP when it shot it down," and "had not first coordinated the use of the laser system with the US Federal Aviation Administration," Bloomberg wrote, citing anonymous sources.

The military hasn't been coordinating counter-drone measures with the FAA, and "CBP drone operators didn't inform the military's laser unit that it was launching," Bloomberg wrote, citing anonymous sources. Because the CBP didn't notify the Defense Department, the military viewed the aircraft as "an unknown drone," the Times wrote, citing an unnamed Pentagon official.

The latest incident came about two weeks after the FAA abruptly closed airspace over El Paso for a few hours, leading to flight cancellations. In the early February incident, CBP was the one that fired the laser. The CBP was "using the same technology on loan from the military to combat drug-smuggling" and "fired a high-energy laser at what they thought was a drone," but turned out to be a party balloon, the Times wrote.

"In both cases, the lasers were used without the FAA's approval, which many aviation safety experts maintain is a violation of the law," the Times wrote.

[...] The Pentagon, CBP, and FAA confirmed some details of the incident in a joint statement provided to Ars by the Pentagon today. The statement said the "engagement occurred when the Department of War employed counter-unmanned aircraft system authorities to mitigate a seemingly threatening unmanned aerial system operating within military airspace. The engagement took place far away from populated areas and there were no commercial aircraft in the vicinity."

[...] The statement did not mention that the drone was a CBP drone, and the Pentagon declined to provide further details to Ars.


Original Submission

posted by hubie on Monday March 02, @10:41AM   Printer-friendly

Trump Bans Anthropic AI From Federal Agencies After Firm Refuses to Unlock Capabilities

Anthropic cites risks of autonomous military applications, mass domestic surveillance:

President Donald Trump ordered every U.S. federal agency to stop using technology from AI company Anthropic on Friday, February 27, posting the directive to Truth Social at 3:47 PM ET — more than an hour before the Pentagon's own 5:01 PM ET deadline for Anthropic to comply with its demands.

“THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS,” Mr. Trump fumed on Truth Social, adding that he is directing every U.S. federal agency to “IMMEDIATELY CEASE all use of Anthropic's technology.”

[...] After months of private talks collapsed into a public standoff this week, Amodei said Thursday his company "cannot in good conscience accede" to the DoD's terms. The Pentagon responded by threatening to invoke the Korean War-era Defense Production Act to compel Anthropic's compliance and warned it would designate the company a "supply chain risk" — a label typically reserved for companies from adversarial nations such as Huawei.

[...] Claude was the only AI model approved for use in classified military systems, and defense software firm Palantir, which uses Claude to power its most sensitive government contracts, will need to find a replacement quickly. OpenAI CEO Sam Altman said Friday he shares Anthropic's position on autonomous weapons' ethical “red lines,” complicating OpenAI's candidacy as a direct replacement.

Also see:
    • Trump Slams Anthropic as 'Woke,' Orders Feds to Stop Using Claude AI
    • Claude won't be allowed to engage in mass surveillance or power fully autonomous weapons

Meanwhile:

OpenAI to work with Pentagon after Anthropic dropped by Trump over company's ethics concerns

CEO Sam Altman claims military will not use AI product for autonomous killing systems or mass surveillance:

OpenAI said it had struck a deal with the Pentagon to supply AI to classified US military networks, hours after Donald Trump ordered the government to stop using the services of one of the company's main competitors.

Sam Altman, OpenAI's CEO, announced the move on Friday night. It came after an agreement between Anthropic, a rival AI company that runs the Claude system, and the Trump administration broke down after Anthropic sought assurances its technology would not be used for mass surveillance – nor for autonomous weapons systems that can kill people without human input.

Announcing the deal, Altman insisted that OpenAI's agreement with the government included assurances that it would not be used to those ends.

[...] If OpenAI's deal does prohibit its systems from being used for unethical ends, it would appear the company has succeeded in receiving assurances where Anthropic could not. Altman announced the deal with the government shortly after the president said he would direct all federal agencies to "IMMEDIATELY CEASE" all use of Anthropic technology.

[...] It remains to be seen how OpenAI staff respond to the government deal. In its battle with the Trump administration, Anthropic has drawn support from its most fierce rivals. Nearly 500 OpenAI and Google employees signed on to an open letter saying "we will not be divided".

"The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused," the letter reads. "They're trying to divide each company with fear that the other will give in."


Original Submission #1 | Original Submission #2

posted by hubie on Monday March 02, @05:58AM   Printer-friendly

There's a silent vulnerability lurking underneath the architecture of Wi-Fi networks:

A team of researchers from the University of California, Riverside revealed a series of weaknesses in existing Wi-Fi security, allowing them to intercept data on a network infrastructure that they've already connected to, even with client isolation in place.

The group calls this vulnerability AirSnitch and, according to their paper [PDF], it exploits inherent weaknesses in the networking stack. Because Wi-Fi does not cryptographically bind client MAC addresses, Wi-Fi encryption keys, and IP addresses across Layers 1, 2, and 3 of the network stack, an attacker can assume the identity of another device and trick the network into diverting that device's downlink and uplink traffic through the attacker.
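The binding gap can be illustrated with a toy model (all names and structures below are invented for illustration; this is not the paper's actual exploit, just the shape of the flaw: the table mapping IPs to clients is learned from unauthenticated traffic, with no cryptographic proof of ownership):

```python
# Toy model of the layer-binding gap: the AP's IP-to-MAC table is learned
# from unauthenticated traffic, so "last writer wins" with no proof that
# the claiming MAC legitimately owns the IP.

class AccessPoint:
    def __init__(self):
        self.ip_to_mac = {}     # learned from unauthenticated ARP-style messages
        self.session_keys = {}  # per-client pairwise encryption keys

    def associate(self, mac, key):
        self.session_keys[mac] = key

    def learn(self, ip, mac):
        # No ownership check: any associated client can claim any IP.
        self.ip_to_mac[ip] = mac

    def deliver(self, dst_ip, payload):
        mac = self.ip_to_mac[dst_ip]
        # Downlink traffic is encrypted under whichever MAC claims the IP.
        return mac, self.session_keys[mac], payload

ap = AccessPoint()
ap.associate("victim-mac", key="K_victim")
ap.associate("attacker-mac", key="K_attacker")

ap.learn("10.0.0.5", "victim-mac")    # victim's legitimate binding
ap.learn("10.0.0.5", "attacker-mac")  # attacker overwrites it

mac, key, data = ap.deliver("10.0.0.5", "downlink data")
assert mac == "attacker-mac" and key == "K_attacker"  # diverted and readable
```

Nothing in the model is decrypted by brute force: the traffic is simply re-encrypted under the attacker's own valid session key, which is exactly why client isolation alone does not help.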

Xin'an Zhou, the lead author on the research, said in an interview, according to Ars Technica, that AirSnitch "breaks worldwide Wi-Fi encryption, and it might have the potential to enable advanced cyberattacks." He also added, "Advanced attacks can build on our primitives to [perform] cookie stealing, DNS and cache poisoning. Our research physically wiretaps the wire altogether so these sophisticated attacks will work. It's really a threat to worldwide network security."

Despite that framing, AirSnitch does not actually break the encryption itself; rather, it undermines the general assumption that clients on the same network cannot attack each other because they are cryptographically isolated.

[...] The researchers found that these vulnerabilities exist in five popular home routers — Netgear Nighthawk x6 R8000, Tenda RX2 Pro, D-LINK DIR-3040, TP-Link Archer AXE75, and Asus RT-AX57 — two open-source firmwares — DD-WRT v3.0-r44715 and OpenWrt 24.10 — and across two university enterprise networks. This shows that the issue is not just limited to how manufacturers make and program their routers. Instead, it’s a problem with Wi-Fi itself, where its architecture is vulnerable to attackers who know how to take advantage of its flaws.

While this may sound bad, the researchers pointed out that this type of attack is rather difficult to pull off, especially given how complex modern wireless networks have become. Still, that does not mean that manufacturers and standardization groups should ignore the problem. The group hopes this revelation will push the industry to come together and create a rigorous set of requirements for client isolation to avoid this flaw in the future.


Original Submission

posted by hubie on Monday March 02, @01:11AM   Printer-friendly

https://www.slashgear.com/2107938/removable-battery-phones-making-comeback/

Many of today's mobile phones, like the slim iPhone Air, are lightweight and sleek, with an advanced design and the latest in modern technology. It's a far cry from previous models, which were bulkier, had buttons, and bulged in your pocket. But while mobile phones have evolved over the years, the current fixed-battery design is reverting to its old form, thanks to legislation from the European Union (EU). Based on these new guidelines, phones will once again need batteries that can be safely removed and replaced by the user.

The EU's legislation also mandates that replacement batteries, while meeting the device's technical specifications, not be bound by proprietary limits. This means that a phone must be able to accept a compatible battery that meets the device's safety and technical standards, whether or not it's manufacturer-branded. Plus, replacement batteries must be available to the user for at least 5 to 7 years following a model's end of production. The EU has set a deadline of February 18, 2027, for these requirements to be met.

[...] The EU's new legislation requiring smartphones to have removable batteries accomplishes a few different things. First, allowing users to replace a spent battery with a new one helps extend the life of the device before its final disposal. Plus, it also enables battery repair or replacement without throwing out the entire phone. By giving users this capability, the rules are meant to encourage reuse of existing phones and help cut down on electronic waste.

[...] But if removable batteries become the norm once again, then phone design could take a step backward in terms of overall construction. That's because cases may need to become thicker to accommodate the removable batteries, and additional safety features would need to be added to protect the new design as well. Until the top phone manufacturers reveal newer models to satisfy the EU's standards, it's unclear what changes users can expect to see.


Original Submission

posted by janrinok on Sunday March 01, @08:22PM   Printer-friendly

https://arstechnica.com/science/2026/02/genomes-chart-the-history-of-neanderthal-modern-human-interactions/

By now, it's firmly established that modern humans and their Neanderthal relatives met and mated as our ancestors expanded out of Africa, resulting in a substantial amount of Neanderthal DNA scattered throughout our genome. Less widely recognized is that some of the Neanderthal genomes we've seen have pieces of modern human DNA as well.

Not every modern human has the same set of Neanderthal DNA, however; different people will, by chance, have inherited different fragments. But there are also some areas, termed "Neanderthal deserts," where none of the Neanderthal DNA seems to have persisted. Notably, the largest Neanderthal desert is the entire X chromosome, raising questions about whether this reflects the evolutionary fitness of genes there or mating preferences.

Now, three researchers at the University of Pennsylvania, Alexander Platt, Daniel N. Harris, and Sarah Tishkoff, have done the converse analysis: examining the X chromosomes of the handful of completed Neanderthal genomes we have. It turns out there's a strong bias toward modern human sequences there as well, and the authors interpret that as selective mating, with Neanderthal males showing a strong preference for modern human females and their descendants.

Given how long modern humans and Neanderthals had been evolving as separate populations, some degree of genetic incompatibility is definitely possible. Lots of proteins interact in various ways, and the genes behind these interaction networks will evolve together—a change in one gene will often lead to compensatory changes in other genes in the network. Over time, those changes may mean re-introducing the original gene will actually disrupt the network, with a negative impact on fitness.

That means the introduction of some Neanderthal genes into the modern human genome (or vice versa) would be disruptive and make carriers of them less fit. So they'd be selected against and lost over the ensuing generations. Of course, some segments would likely be lost at random—the genome's pretty big, and the modern human population was likely large and growing, allowing its DNA to dilute out the influence of other human populations. Figuring out which influence is dominant can be challenging.

One way to sort this out is to make the same comparison with Neanderthal genomes. If a Neanderthal gene is disruptive in a modern human context, then it's likely that the modern human version will be disruptive in Neanderthals. And, in fact, that's what we seem to see: A look at one Neanderthal genome found that there's some correlation between the Neanderthal deserts in the human genome and the human deserts in that Neanderthal.

All of that, however, doesn't go far to explain the fact that the X chromosome looks like a giant Neanderthal desert, with long stretches of nothing but modern human DNA. The genetics of the X is complicated by the fact that males inherit a single copy from their mothers, so they have only a single copy of almost every gene on it. If any of those genes are causing problems, they will be quickly selected against in males.

Thus, evolutionary selection against the Neanderthal X is definitely an option. The alternative they consider is that it's the product of biased matings. If most mating between the two groups was biased in some way, it could skew the frequency with which the X chromosome was inherited. For example, if most of the matings involved Neanderthal males and modern human females, then you would have fewer Neanderthal X's around as a result, since only half of a male's offspring will inherit an X chromosome from them.
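The direction of that skew follows from simple Mendelian counting (a sketch of the inheritance logic, not the paper's model): a father passes his X only to daughters, so in a Neanderthal-father/modern-human-mother pairing, none of the sons carry a Neanderthal X.

```python
# Expected ancestry share of X chromosomes in first-generation offspring of
# a Neanderthal father and a modern-human mother, assuming an equal-sex
# brood of one daughter and one son. Fractions keep the arithmetic exact.
from fractions import Fraction

# Each X is tallied as 1 if Neanderthal-derived, 0 if modern human.
daughter_x = [Fraction(1), Fraction(0)]  # father's (Neanderthal) X + mother's X
son_x      = [Fraction(0)]               # single X, from the mother only

x_pool = daughter_x + son_x
neanderthal_x_share = sum(x_pool) / len(x_pool)

print(neanderthal_x_share)  # 1/3 — versus 1/2 for autosomes
```

So even before any selection acts, this mating direction dilutes the Neanderthal contribution to the X pool relative to the rest of the genome.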

To figure out which result might be the case, the researchers again turned to the three Neanderthal genomes we have available, looking at the pattern of inheritance along the X chromosome. That was compared to X chromosomes from African populations that have very little Neanderthal DNA.

The results contrasted sharply with what was seen elsewhere in the genome, where Neanderthal deserts in modern humans correspond to human deserts in the Neanderthal genome. Instead, the X chromosome in Neanderthals tended to have an excess of modern human sequences—exactly as you see in modern humans. It appears that the modern human X ended up more common in both human and Neanderthal populations.

Could this be from evolutionary selection for something favorable about it? The researchers found that modern human DNA found on the Neanderthal X had a lower than average frequency of important sequences like those that regulate nearby genes or code for proteins. While that doesn't rule out evolutionary selection as a factor, it does make it seem a bit less likely, since there's less indication that the DNA being kept around is functional.

That leaves preferential mating as a more probable explanation. But the modern human DNA was present at such a high frequency on the X that it's difficult to explain by a simple preference of Neanderthal males for modern human females. Instead, you'd have to have a continued preference for the offspring of these matches as well. "We did not rule out more complicated scenarios combining selection and sex biases, such as natural selection acting as a modifying force on top of the strong signature left by sex bias," the authors also note.

Overall, we're left with a picture of a relatively large number of matings between male Neanderthals and modern human females. The offspring of these matings ended up in both the modern human and Neanderthal populations; in the latter, their offspring were favored enough to have led to an excess contribution to the X chromosome.

Science, 2026. "Interbreeding between Neanderthals and modern humans was strongly sex biased"
DOI: 10.1126/science.aea6774 (Source).


Original Submission

posted by janrinok on Sunday March 01, @03:37PM   Printer-friendly

https://www.slashgear.com/2109851/states-cracking-down-drivers-move-over-laws/5def81c4f1d3d85733888dd4951cd6f1

There is a trend in a variety of states across the U.S. to crack down on those motorists who do not observe "Slow Down, Move Over" laws that require drivers to reduce their speed and clear the lane next to emergency responders and motorists who are parked on the shoulders of highways. As of 2012, every state in the union had one of these laws, but they provided protection only to fire, police, and ambulance personnel. As time went on, a number of states expanded these laws to cover road crews, utility vehicles, and tow trucks. The latest development involves expanding these laws to cover anyone who finds themselves stuck by the side of the road.

According to the Emergency Responder Safety Institute, a total of 46 roadside responders lost their lives while helping people by the side of the road during 2024. This was in spite of Slow Down, Move Over laws being in effect in every single state.

A recent study put out by the AAA Foundation for Traffic Safety has revealed that, because of both drivers' lack of compliance and poor understanding about these laws, more than one-third of all drivers do not slow down or move over when workers are present on the roadside. The other two-thirds either changed lanes or reduced their speed but did not do both. The study found that fewer motorists slowed down or moved over for tow trucks, while a higher percentage conformed with this behavior for police vehicles.

AAA provided several recommendations for improving the public's awareness of and compliance with these Slow Down, Move Over laws that already exist in every state. It recommends that all 50 states' laws be standardized to give protection to all types of vehicles on the roadside and to any person who happens to end up there. It also suggests a public education campaign that starts with driver's ed. classes and reaches older drivers through public service announcements, navigation apps, and roadway signs.

Finally, AAA acknowledged that more emphasis needs to be put on enforcement of these Slow Down, Move Over laws, with an initial emphasis on making the driving public more aware of these laws. This should then provide the desired result of having more drivers observe them by slowing down and moving over, which will save many more lives of first responders, roadside workers, and anyone else who finds themselves stuck on the side of the road. And that's a good thing.

Slow Down, Move Over laws already exist in every U.S. state that you will ever drive through. So it only makes sense to be alert for any vehicles stopped by the roadside, slow down if you see any, and move over whenever possible to give them some space to do what they need to do and get home safely.


Original Submission

posted by hubie on Sunday March 01, @10:52AM   Printer-friendly

Neuron-powered computer chips can now be easily programmed to play a first-person shooter game, bringing biological computers a step closer to useful applications:

A clump of human brain cells can play the classic computer game Doom. While its performance is not up to par with humans, experts say it brings biological computers a step closer to useful real-world applications, like controlling robot arms.

In 2021, the Australian company Cortical Labs used its neuron-powered computer chips to play Pong. The chips consisted of clumps of more than 800,000 living brain cells grown on top of microelectrode arrays that can both send and receive electrical signals. Researchers had to carefully train the chips to control the paddles on either side of the screen.

Now, Cortical Labs has developed an interface that makes it easier to program these chips using the popular programming language Python. An independent developer, Sean Cole, then used Python to teach the chips to play Doom, which he did in around a week.
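The article does not describe Cortical Labs' actual API, but the general shape of such a closed-loop interface — write a stimulation pattern, read back spikes, decode an action — can be sketched. Every class and method name below is hypothetical, invented purely for illustration:

```python
# Hypothetical closed-loop interface sketch. NeuronChip, stimulate, and
# read_spikes are invented names, not Cortical Labs' real API.
import random

class NeuronChip:
    """Stand-in for a microelectrode-array interface to cultured neurons."""
    def stimulate(self, pattern):
        self._last = pattern              # write electrical input (game state)
    def read_spikes(self):
        return [random.random() for _ in self._last]  # read activity back

def encode_state(game_state):
    return game_state                     # map game variables to electrodes

def decode_action(spikes):
    return max(range(len(spikes)), key=spikes.__getitem__)  # busiest channel wins

chip = NeuronChip()
state = [0.1, 0.7, 0.2]                   # toy three-variable game state
chip.stimulate(encode_state(state))
action = decode_action(chip.read_spikes())
# In the real system, this loop runs continuously, with structured vs. noisy
# feedback stimulation used as reward/punishment to shape behaviour over time.
```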

"Unlike the Pong work that we did a few years ago, which represented years of painstaking scientific effort, this demonstration has been done in a matter of days by someone who previously had relatively little expertise working directly with biology," says Brett Kagan of Cortical Labs. "It’s this accessibility and this flexibility that makes it truly exciting."

The neuronal computer chip, which used about a quarter as many neurons as the Pong demonstration, played Doom better than a randomly firing player, but far below the performance of the best human players. However, it learnt much faster than traditional, silicon-based machine learning systems and should be able to improve its performance with newer learning algorithms, says Kagan.

However, it's not useful to compare the chips with human brains, he says. "Yes, it's alive, and yes, it’s biological, but really what it is being used as is a material that can process information in very special ways that we can’t recreate in silicon."

[...] Even so, the jump in capability is exciting, says Yoshikatsu Hayashi at the University of Reading, UK, and brings us significantly closer to useful real-world applications, such as controlling a robotic arm with biological computers, a task which Hayashi and his colleagues are attempting with a similar computer made from jelly-like hydrogel. "[Playing Doom] is like a simpler version of controlling a whole arm," says Hayashi.

"What's exciting here is not just that a biological system can play Doom, but that it can cope with complexity, uncertainty, and real-time decision-making," says Adamatzky. "That's much closer to the kinds of challenges future biological or hybrid computers will need to handle."


Original Submission

posted by jelizondo on Sunday March 01, @06:10AM   Printer-friendly
from the so-it-begins dept.

https://arstechnica.com/ai/2026/02/block-lays-off-40-of-workforce-as-it-goes-all-in-on-ai-tools/c16fbef0848a80413fcac6e5598b4dc9

Block, the fintech group headed by Twitter cofounder Jack Dorsey, will cut its workforce by "nearly half" in one of the clearest signs of the sweeping impact AI tools are having on employment.

Shares in the payment company soared more than 25 percent in after-hours trading on Thursday as it announced it would shed more than 4,000 jobs from its 10,000-strong workforce.

"Intelligence tools have changed what it means to build and run a company. We're already seeing it internally," Dorsey wrote in a letter to shareholders.

"A significantly smaller team, using the tools we're building, can do more and do it better. And intelligence tool capabilities are compounding faster every week."

Dorsey, who left his role as CEO of Twitter in 2021, is among the first Silicon Valley chiefs to explicitly tie huge job cuts to the ability of AI to replace human workers.

Amazon has sought to play down the link to AI after announcing lay-offs totaling 30,000 roles since October, months after CEO Andy Jassy warned the technology would mean "fewer people doing some of the jobs that are being done today" in the coming years, especially in white-collar roles.

Dorsey said he did not think he was early to the realization about the effect that AI could have on work, but that "most companies are late."

He said he expected a "majority of companies" would reach the same conclusion within the next year and make similar structural changes.

The staff reduction at Block comes as anxiety rises about AI leading to job losses across vast parts of the economy.

Investors and economists are grappling with an influx of US economic data and corporate announcements in an effort to gauge the impact the technology could be having on the labor market. The latest non-farm payrolls figures were better than expected, suggesting the domestic jobs market was stabilizing, but several big US companies have committed to cutting staff.

Amazon, UPS, Dow, Nike, Home Depot, and others in late January announced they would be cutting a combined 52,000 jobs.

Dorsey said the cuts at Block, which owns the payment processor Square, came despite what he described as a "strong" financial performance in 2025.

Block has made a contrarian bet on bitcoin at a time when many payment companies favored stablecoins: cash-like digital tokens that became regulated in the US last year.

Block's strategy was spearheaded by Dorsey, a "bitcoin maximalist" who has said he believes the digital currency will eventually eclipse the dollar.

The company offers payment services in bitcoin for merchants and consumers—and suffered a loss on its own bitcoin holdings as the price of the cryptocurrency dropped 23 percent this year.

In contrast, payment companies that made a bet on stablecoins experienced a boost. Stripe earlier this week said its stablecoin transaction volumes increased fourfold last year.

In its fiscal fourth quarter, Block reported revenue of almost $6.3 billion, in line with Wall Street expectations. Its earnings tumbled to 19 cents a share, owing to a $234 million hit on its bitcoin holdings.


Original Submission

posted by jelizondo on Sunday March 01, @01:27AM   Printer-friendly

https://www.slashgear.com/2105562/us-military-c-17-airlifts-advanced-nuclear-reactor-california-to-utah/

February 15, 2026 was a historic day for the United States military and the future of nuclear reactors in the United States. That's the day United States Air Force personnel and civilian contractors worked together to load a 5-megawatt nuclear reactor onto a USAF C-17 Globemaster III, an airplane some call the Moose. It was the first time a nuclear reactor was airlifted by a C-17. The March Air Reserve Base, located about 60 miles east of Los Angeles, is home to the 452d Air Mobility Wing, which operates a squadron of C-17s among other aircraft.

The nuclear reactor, a Ward 250 micro-reactor, fits neatly into the back of the plane, making it easily transportable. The history-making flight saw the Ward 250 transported from March to Utah's Hill Air Force Base, about 30 miles north of Salt Lake City. From there, it'll make its way to the Utah San Rafael Energy Lab where it will undergo more testing and evaluation, according to a press release issued by the U.S. Department of War.

On August 12, 2025, just six months before the historic flight aboard the C-17 Globemaster III, Isaiah Taylor issued a press release announcing his company, Valar Atomics, had been selected to participate in "the President's accelerated nuclear program." The nation's renewed interest in nuclear energy led to a May 23, 2025, Executive Order directing the US Army to build a nuclear microreactor and provide nuclear energy to a domestic military installation by September 30, 2028, and also spurred private-sector programs such as the Ward 250.

Valar Atomics developed its WardZero prototype in Los Angeles months before Executive Order 14301 was signed in May of 2025. The US Department of Energy selected the Valar Atomics Ward 250 as one of the projects poised to "achieve criticality on American soil by July 4th, 2026," as directed by section five of EO 14301.

The 5-megawatt Ward 250 nuclear microreactor is about the size of a large van. The power generated by the small package is enough to serve an estimated 5,000 homes or a sizable military installation. With the rapid-deployment capability demonstrated by the February 15th flight, the Ward 250 could eliminate the need for military operations to rely on civilian power grids or diesel-powered generators anywhere in the world.
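The sizing claim is easy to check (the reactor output and home count are from the article; ~1 kW of average draw per home is a common rule of thumb, not a figure from the source):

```python
# Per-home power implied by the article's figures.
reactor_w = 5_000_000   # 5 MW
homes = 5_000

per_home_w = reactor_w / homes
print(per_home_w)  # 1000.0 — about 1 kW of average load per home
```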

The USAF Boeing C-17 Globemaster III is a capable aircraft designed to safely transport cargo and military personnel. While the C-5 Galaxy is larger, the C-17 squadron located at March Air Reserve Base offered close proximity to Valar Atomics, where the Ward 250 was built, and its capabilities closely matched the payload requirements.


Original Submission

posted by jelizondo on Saturday February 28, @08:41PM   Printer-friendly

OpenAI has closed a new funding round that could total $110 billion, valuing the ChatGPT maker at $730 billion pre-money and potentially putting it on course for an IPO in the second half of the year:

The new funding round comes on top of the $40 billion already on OpenAI's balance sheet, giving the company more runway to rapidly expand and develop new models and AI infrastructure. OpenAI expects to remain unprofitable until 2030, when management forecasts it will turn free cash flow positive.

In a separate release, Amazon detailed its major multi-year partnership with OpenAI, centered on enterprise AI infrastructure, distribution, and custom model development.

Here are the highlights of the Amazon-OpenAI investment:

  • Amazon will invest $50 billion in OpenAI, with $15 billion upfront and another $35 billion later if certain conditions are met.
  • AWS and OpenAI will jointly build a "Stateful Runtime Environment" powered by OpenAI models and offered through Amazon Bedrock, aimed at helping customers run AI apps and agents with persistent context, memory, tool access, and compute.
  • AWS becomes the exclusive third-party cloud distribution provider for OpenAI Frontier, OpenAI's enterprise platform for building and managing teams of AI agents.
  • OpenAI will expand its AWS infrastructure commitment by $100 billion over 8 years, on top of an existing $38 billion agreement.
  • As part of that, OpenAI will use roughly 2 gigawatts of AWS Trainium capacity, spanning Trainium3 and future Trainium4 chips, to support Frontier, Stateful Runtime, and other advanced workloads.
  • OpenAI and Amazon will also develop custom OpenAI-based models for Amazon's customer-facing apps, giving Amazon teams another model option alongside its in-house Nova family.

"OpenAI and Amazon share a belief that AI should show up in ways that are practical and genuinely useful for people," OpenAI boss Sam Altman stated, adding, "Combining OpenAI's models with Amazon's infrastructure and global reach helps us put powerful AI into the hands of businesses and users at real scale."

Altman commented on today's announcement, saying, "As long as revenue keeps growing, the deals are not circular."

Previously:


Original Submission

posted by jelizondo on Saturday February 28, @03:59PM   Printer-friendly

https://osmand.net/blog/fast-routing/

Offline navigation is a lifeline for travelers, adventurers, and everyday commuters. We demand speed, accuracy, and the flexibility to tailor routes to our specific needs. For years, OsmAnd has championed powerful, feature-rich offline maps that fit in your pocket. But as maps grew more detailed and user demands for complex routing increased, our trusty A* algorithm, despite its flexibility, started hitting a performance wall. How could we deliver a 100x speed boost without bloating map sizes or sacrificing the deep customization our users love?

The answer: OsmAnd's custom-built Highway Hierarchy (HH) Routing. This isn't your standard routing engine; it's a ground-up redesign, meticulously engineered to overcome the unique challenges of providing advanced navigation on compact, offline-first map data.
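The article contrasts Highway Hierarchy routing with the classic A* algorithm it replaces. For readers unfamiliar with the baseline, here is a minimal A* sketch over a toy graph (illustrative only, not OsmAnd's implementation; with a zero heuristic it degenerates to Dijkstra's algorithm):

```python
import heapq

def a_star(graph, start, goal, heuristic):
    """Classic A* over an adjacency dict {node: [(neighbor, cost), ...]}."""
    # Priority queue entries: (estimated_total_cost, cost_so_far, node)
    frontier = [(heuristic(start), 0, start)]
    best = {start: 0}
    while frontier:
        _, cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue  # stale queue entry, a cheaper path was already found
        for neighbor, edge_cost in graph.get(node, []):
            new_cost = cost + edge_cost
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                heapq.heappush(
                    frontier, (new_cost + heuristic(neighbor), new_cost, neighbor)
                )
    return None  # goal unreachable

# Toy graph: A -> B -> D (cost 3) is cheaper than A -> C -> D (cost 5)
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("D", 2)],
    "C": [("D", 1)],
}
print(a_star(graph, "A", "D", lambda n: 0))  # → 3
```

HH-style engines precompute a hierarchy of "important" roads so that long-distance queries can skip most of the graph, whereas A* must explore node by node, which is why it hits a wall as maps grow.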


Original Submission

posted by mrpg on Saturday February 28, @11:11AM   Printer-friendly
from the G7 dept.

https://www.irregular.com/publications/vibe-password-generation

To security practitioners, the idea of using LLMs to generate passwords may seem silly. Secure password generation is nuanced, and requires care to implement correctly; the random seed, the source of entropy, the mapping of random output to password characters, and even the random number generation algorithm must be chosen carefully in order to prevent critical password recovery attacks. Moreover, password managers (generators and vaults) have been around for decades, and this is exactly what they’re designed to do.

At the heart of any strong password generator is a cryptographically-secure pseudorandom number generator (CSPRNG), responsible for generating the password characters in such a way that they are very hard to predict, and are drawn from a uniform probability distribution over all possible characters.
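For comparison with the LLM approach, doing this correctly takes only a few lines. A minimal sketch using Python's `secrets` module, which draws from the operating system's CSPRNG (the 20-character length and alphabet are illustrative choices):

```python
import secrets
import string

# 94 printable ASCII characters: letters, digits, punctuation
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=20):
    """Draw each character independently and uniformly from ALPHABET
    using the OS CSPRNG via the secrets module."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Each character contributes log2(94) ≈ 6.55 bits, so 20 characters
# give roughly 131 bits of entropy.
print(generate_password())
```

Because `secrets.choice` samples uniformly from a cryptographic source, every character carries the full entropy of the alphabet, which is exactly the property LLM sampling lacks.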

Conversely, the LLM output token sampling process is designed to do exactly the opposite. Basically, all LLMs do is iteratively predict the next token; the random generation of tokens is, by definition, predictable (with the token probabilities decided by the LLM), and the probability distribution over all possible tokens is very far from uniform.
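The entropy loss from a non-uniform distribution can be made concrete. A small sketch (the skewed distribution below is a made-up illustration, not measured LLM token probabilities) comparing bits of unpredictability per character:

```python
import math

def shannon_entropy(probs):
    """Bits of unpredictability per symbol drawn from this distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

n = 94  # size of a typical printable-ASCII password alphabet
uniform = [1 / n] * n

# A skewed distribution loosely mimicking LLM next-token sampling:
# a handful of "likely" symbols absorb most of the probability mass.
skewed = [0.4, 0.2, 0.1, 0.05, 0.05] + [0.2 / (n - 5)] * (n - 5)

print(f"uniform: {shannon_entropy(uniform):.2f} bits/char")  # → 6.55
print(f"skewed:  {shannon_entropy(skewed):.2f} bits/char")   # substantially lower
```

Even a moderately skewed distribution roughly halves the effective entropy per character, so a password that "looks" long can be far easier to guess than its length suggests.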

In spite of this, LLM-generated passwords are likely to be generated and used. First, with the explosive growth and significant improvement in capabilities of AI over the past year (which, at Irregular, we have also seen direct evidence of in the offensive security domain), AI is much more accessible to less technologically-inclined users. Such users may not know secure methods for password generation, not place importance on them, and rely on ubiquitous AI tools to generate a password instead of looking for a specialized tool, such as a password manager. Moreover, while LLM-generated passwords are insecure, they appear strong and secure to the untrained eye, exacerbating this issue and reducing the likelihood that users will avoid these passwords.

Furthermore, with the recent surge in popularity of coding agents and vibe-coding tools, people are increasingly developing software without looking at the code. We’ve seen that these coding agents are prone to using LLM-generated passwords without the developer’s knowledge or choice. When users don’t review the agent actions or the resulting source code, this “vibe-password-generation” is easy to miss.

TFA shows results obtained using several major LLMs, including GPT, Claude, and Gemini in their latest versions and most powerful variations, and found that all of them generate weak passwords.

Originally spotted on Schneier on Security.


Original Submission

posted by mrpg on Saturday February 28, @06:30AM   Printer-friendly
from the piratas-informáticos dept.

A single attacker used Anthropic's Claude and OpenAI's ChatGPT to compromise nine Mexican government agencies, stealing 195 million taxpayer records and voter data:

On February 25, 2026, Bloomberg published a story that would have sounded like fiction two years ago. A lone hacker, with no apparent ties to any government, used Anthropic's Claude chatbot to orchestrate a cyberattack against Mexico's federal and state government agencies. The campaign lasted roughly six weeks, from late December 2025 through January 2026. By the time it was over, the attacker had stolen 150 gigabytes of sensitive data -- including 195 million taxpayer records, voter registration files, government employee credentials, and civil registry data.

The hacker did not use custom malware. They did not deploy a zero-day exploit. They used a consumer AI subscription and a set of carefully written Spanish-language prompts. The AI did the rest.

The breach was uncovered not by any of the affected agencies, but by Gambit Security, an Israeli cybersecurity startup whose researchers stumbled onto publicly accessible conversation logs showing exactly how the attacker coaxed Claude into becoming an offensive hacking assistant. The paper trail was remarkably detailed -- a step-by-step record of how guardrails were tested, resisted, and ultimately bypassed.

"This reality is changing all the game rules we have ever known," said Alon Gromakov, Gambit Security's co-founder and CEO.

TFA goes on to list what was stolen, how Claude was weaponized and how the affected entities responded.


Original Submission

posted by mrpg on Saturday February 28, @01:40AM   Printer-friendly
from the the-failure-is-the-system dept.

Hackers Expose The Massive Surveillance Stack Hiding Inside Your “Age Verification” Check:

We’ve been saying this for years now, and we’re going to keep saying it until the message finally sinks in: mandatory age verification creates massive, centralized honeypots of sensitive biometric data that will inevitably be breached. Every single time.

[...] A couple weeks ago, Discord announced it would launch “teen-by-default” settings for its global audience, meaning all users would be shunted into a restricted experience unless they verified their age through biometric scanning. The internet, predictably, was not thrilled. But while many users were busy venting their frustration, a group of security researchers decided to do something more useful: they took a look under the hood at Persona, one of the companies Discord was using for verification (specifically for users in the UK).

[...] Let me say that again: 2,456 publicly accessible files sitting on a government-authorized server, exposed to the open internet. Files that revealed a system performing not a simple age check, but a ton of potentially intrusive checks:

Once a user verifies their identity with Persona, the software performs 269 distinct verification checks and scours the internet and government sources for potential matches, such as by matching your face to politically exposed persons (PEPs), and generating risk and similarity scores for each individual. IP addresses, browser fingerprints, device fingerprints, government ID numbers, phone numbers, names, faces, and even selfie backgrounds are analyzed and retained for up to three years.

[...] Discord, to its credit, has now said it will not be proceeding with Persona for identity verification. And to be fair, Discord and similar internet companies are in an impossible position here—facing mounting regulatory pressure in multiple jurisdictions to verify ages while being handed a market of vendors who keep turning out to be security nightmares. But this is part of a pattern that should be deeply familiar by now.

[...] See the pattern? Discord keeps swapping vendors like someone frantically rotating buckets under a leaking roof, apparently hoping the next bucket won’t have a hole in it. But the problem was never the bucket. The problem is the hole in the roof — the never-ending stream of age-verification government mandates.

And this brings us to the bigger, more important point that almost nobody in the “protect the children” policy crowd seems willing to engage with honestly. Every single time you mandate age verification, you are mandating the creation of a centralized database of extraordinarily sensitive personal information. Government IDs. Biometric facial data. The kind of data that, once breached, cannot be “changed” like a password. You get one face. You get one government ID number. When those leak—and they will leak—the damage is permanent.

[...] We have been cataloging these breaches for years. In 2024, Australia greenlit an age verification pilot, and hours later a mandated verification database for bars was breached. That same year, another ID verification service was breached, exposing private info collected on behalf of Uber, TikTok, and more. Then came the Discord vendor breach last year. And now Persona.


Original Submission